Contrastive Learning with the Feature Reconstruction Amplifier
Authors
Abstract
Contrastive learning has emerged as one of the most promising self-supervised methods. It can efficiently learn transferable representations of samples through an instance-level discrimination task. In general, the performance of a contrastive method can be further improved by projecting high-dimensional representations into a low-dimensional feature space, because the model can learn more abstract discriminative information there. However, when the features cannot provide sufficient information to the model (e.g., when samples are very similar to each other), existing methods will be limited to a great extent. Therefore, in this paper, we propose a general module called the Feature Reconstruction Amplifier (FRA) for adding additional information to the model. Specifically, FRA reconstructs the feature embeddings with Gaussian noise vectors and projects them into a reconstruction space, where a specially designed loss is added. We have verified the effectiveness of FRA itself through exhaustive ablation experiments. In addition, we perform linear evaluation and transfer learning on five common visual datasets, and the experimental results demonstrate that our method is superior to recent advanced methods.
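The abstract describes FRA only at a high level: perturb embeddings with Gaussian noise, project them into a reconstruction space, and add a reconstruction loss alongside the contrastive objective. A minimal sketch of that idea is below; the layer sizes, the two-layer reconstruction head, the noise scale, and the use of MSE as the reconstruction loss are all assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureReconstructionAmplifier(nn.Module):
    """Illustrative FRA-style module (hypothetical implementation):
    adds Gaussian noise to low-dimensional embeddings, maps the noisy
    embeddings through a reconstruction head, and penalizes the
    reconstruction error as an auxiliary loss."""

    def __init__(self, dim: int, noise_std: float = 0.1):
        super().__init__()
        self.noise_std = noise_std
        # Hypothetical reconstruction head; the paper does not
        # specify this architecture.
        self.reconstructor = nn.Sequential(
            nn.Linear(dim, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Perturb the embeddings with Gaussian noise vectors.
        noisy = z + self.noise_std * torch.randn_like(z)
        # Project the noisy embeddings into the reconstruction space.
        recon = self.reconstructor(noisy)
        # Auxiliary reconstruction loss (MSE is an assumption here);
        # in training this would be added to the contrastive loss.
        return F.mse_loss(recon, z.detach())
```

In use, the returned scalar would simply be weighted and summed with the usual contrastive (e.g., InfoNCE) loss during training.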
Similar Resources
Unsupervised Feature Extraction by Time-Contrastive Learning and Nonlinear ICA
Nonlinear independent component analysis (ICA) provides an appealing framework for unsupervised feature learning, but the models proposed so far are not identifiable. Here, we first propose a new intuitive principle of unsupervised deep learning from time series which uses the nonstationary structure of the data. Our learning principle, time-contrastive learning (TCL), finds a representation wh...
ICA with Reconstruction Cost for Efficient Overcomplete Feature Learning
Independent Components Analysis (ICA) and its variants have been successfully used for unsupervised feature learning. However, standard ICA requires an orthonormality constraint to be enforced, which makes it difficult to learn overcomplete features. In addition, ICA is sensitive to whitening. These properties make it challenging to scale ICA to high dimensional data. In this paper, we propose ...
On Contrastive Divergence Learning
Maximum-likelihood (ML) learning of Markov random fields is challenging because it requires estimates of averages that have an exponential number of terms. Markov chain Monte Carlo methods typically take a long time to converge on unbiased estimates, but Hinton (2002) showed that if the Markov chain is only run for a few steps, the learning can still work well and it approximately minimizes a d...
Bounding the Bias of Contrastive Divergence Learning
Optimization based on k-step contrastive divergence (CD) has become a common way to train restricted Boltzmann machines (RBMs). The k-step CD is a biased estimator of the log-likelihood gradient relying on Gibbs sampling. We derive a new upper bound for this bias. Its magnitude depends on k, the number of variables in the RBM, and the maximum change in energy that can be produced by changing a ...
Contrastive Learning for Image Captioning
Image captioning, a popular topic in computer vision, has achieved substantial progress in recent years. However, the distinctiveness of natural descriptions is often overlooked in previous work. It is closely related to the quality of captions, as distinctive captions are more likely to describe images with their unique aspects. In this work, we propose a new learning method, Contrastive Learn...
Journal
Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence
Year: 2023
ISSN: 2159-5399, 2374-3468
DOI: https://doi.org/10.1609/aaai.v37i6.25887